
    Self Super-Resolution for Magnetic Resonance Images using Deep Networks

    High resolution magnetic resonance (MR) imaging (MRI) is desirable in many clinical applications; however, there is a trade-off between resolution, acquisition speed, and noise. It is common for MR images to have worse through-plane resolution (slice thickness) than in-plane resolution. In such images, high frequency information in the through-plane direction is never acquired and cannot be recovered through interpolation. Super-resolution methods have been developed to enhance spatial resolution. Because super-resolution is an ill-posed problem, state-of-the-art methods rely on external/training atlases to learn the transform from low resolution (LR) images to high resolution (HR) images. For several reasons, such HR atlas images are often not available for MRI sequences. This paper presents a self super-resolution (SSR) algorithm, which uses no external atlas images, yet can still estimate an HR image relying only on the acquired LR image. We use a blurred version of the input image to create training data for a state-of-the-art super-resolution deep network. The trained network is applied to the original input image to estimate the HR image. Our SSR result shows a significant improvement in through-plane resolution compared to competing SSR methods.
    Comment: Accepted by IEEE International Symposium on Biomedical Imaging (ISBI) 201
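The training-data construction described above can be sketched as follows: since no HR atlas exists, the acquired image itself serves as the HR target, and a further-blurred copy of it serves as the LR input. The blur axis, kernel width, patch size, and function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_training_pairs(lr_image, axis=0, sigma=1.5, patch=16, n=100, rng=None):
    """Build (input, target) patch pairs from a single acquired image.

    The acquired image (already low-res through-plane) acts as the HR
    target; a copy blurred along one in-plane axis -- mimicking the
    through-plane slice profile -- acts as the LR input.  sigma is an
    assumed slice-profile width, a modeling choice.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    blurred = gaussian_filter1d(lr_image, sigma=sigma, axis=axis)
    xs, ys = [], []
    for _ in range(n):
        i = rng.integers(0, lr_image.shape[0] - patch)
        j = rng.integers(0, lr_image.shape[1] - patch)
        xs.append(blurred[i:i + patch, j:j + patch])
        ys.append(lr_image[i:i + patch, j:j + patch])
    return np.stack(xs), np.stack(ys)
```

A network trained on these (blurred, original) pairs learns a deblurring transform that is then applied along the through-plane direction of the original image.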

    Mjolnir: Extending HAMMER Using a Diffusion Transformation Model and Histogram Equalization for Deformable Image Registration

    Image registration is a crucial step in many medical image analysis procedures such as image fusion, surgical planning, segmentation and labeling, and shape comparison in population or longitudinal studies. A new approach to volumetric intersubject deformable image registration is presented. The method, called Mjolnir, is an extension of the highly successful method HAMMER. New image features are introduced to better localize points of correspondence between the two images, along with a novel approach that generates a dense displacement field from the weighted diffusion of automatically derived feature correspondences. An extensive validation of the algorithm was performed on T1-weighted SPGR MR brain images from the NIREP evaluation database. The results were compared with results generated by HAMMER and are shown to yield significant improvements in cortical alignment as well as reduced computation time.
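One simple way to realize "weighted diffusion of feature correspondences" is normalized Gaussian-weighted scattered interpolation, where each voxel's displacement is a distance-weighted average of the sparse correspondence displacements. This 2D sketch is an illustrative stand-in; Mjolnir's actual weighting and diffusion model differ in detail.

```python
import numpy as np

def dense_displacement_field(points, disps, shape, sigma=5.0):
    """Interpolate sparse correspondences into a dense 2D displacement field.

    points : (k, 2) correspondence locations (row, col)
    disps  : (k, 2) displacement vectors at those locations
    shape  : (rows, cols) of the output field
    sigma  : assumed diffusion scale (illustrative parameter)
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per grid point
    return (w @ disps).reshape(shape[0], shape[1], 2)
```

At a correspondence point the field reproduces that point's displacement almost exactly; between points it blends them smoothly, with sigma controlling how far each correspondence's influence diffuses.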

    An Eulerian PDE approach for computing tissue thickness

    ©2003 IEEE. DOI: 10.1109/TMI.2003.817775
    We outline an Eulerian framework for computing the thickness of tissues between two simply connected boundaries that does not require landmark points or parameterizations of either boundary. Thickness is defined as the length of correspondence trajectories, which run from one tissue boundary to the other, and which follow a smooth vector field constructed in the region between the boundaries. A pair of partial differential equations (PDEs) that are guided by this vector field are then solved over this region, and the sum of their solutions yields the thickness of the tissue region. Unlike other approaches, this approach does not require explicit construction of any correspondence trajectories. An efficient, stable, and computationally fast solution to these PDEs is found by careful selection of finite differences according to an upwinding condition. The behavior and performance of our method are demonstrated on two simulations and two magnetic resonance imaging data sets in two and three dimensions. These experiments reveal very good performance and show strong potential for application in tissue thickness visualization and quantification.
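For a flat slab the framework reduces to a small, checkable computation: solve Laplace's equation between the two boundaries, normalize its gradient into a tangent field T, then solve the two upwind length PDEs (gradient of L0 dotted with T equals 1, and with -T for L1) and sum them. The 2D grid, sweep counts, and boundary placement below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def slab_thickness(h=20, w=30, laplace_iters=500):
    """Eulerian thickness on a 2D slab: inner boundary = row 0 (u = 0),
    outer boundary = row h-1 (u = 1).  Thickness should be ~(h - 1)."""
    u = np.zeros((h, w))
    u[-1, :] = 1.0
    inner = np.zeros((h, w), bool); inner[0, :] = True
    outer = np.zeros((h, w), bool); outer[-1, :] = True
    interior = ~(inner | outer)

    # 1) Jacobi iterations for Laplace's equation (sides padded as Neumann)
    for _ in range(laplace_iters):
        p = np.pad(u, 1, mode='edge')
        avg = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        u[interior] = avg[interior]

    # 2) Normalized gradient = tangent field of correspondence trajectories
    gy, gx = np.gradient(u)
    norm = np.hypot(gy, gx)
    norm[norm == 0] = 1.0
    ty, tx = gy / norm, gx / norm

    # 3) Upwind Gauss-Seidel sweeps for one length PDE: grad(L) . T = 1,
    #    with upwind neighbors chosen by the sign of each T component
    def length(ty, tx, fixed_zero):
        L = np.zeros((h, w))
        for _ in range(h + w):  # enough sweeps for a slab-shaped region
            for i in range(h):
                for j in range(w):
                    if fixed_zero[i, j]:
                        continue
                    ay, ax = abs(ty[i, j]), abs(tx[i, j])
                    if ay + ax == 0.0:
                        continue
                    ny = L[max(i - 1, 0), j] if ty[i, j] > 0 else L[min(i + 1, h - 1), j]
                    nx = L[i, max(j - 1, 0)] if tx[i, j] > 0 else L[i, min(j + 1, w - 1)]
                    L[i, j] = (1.0 + ay * ny + ax * nx) / (ay + ax)
        return L

    L0 = length(ty, tx, inner)     # trajectory length from inner boundary
    L1 = length(-ty, -tx, outer)   # trajectory length to outer boundary
    return L0 + L1
```

No trajectory is ever traced explicitly: the sum L0 + L1 gives the full trajectory length through every grid point at once, which is the key advantage the abstract describes.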

    CT and MRI fusion for postimplant prostate brachytherapy evaluation

    Postoperative evaluation of prostate brachytherapy is typically performed using CT, which does not have sufficient soft tissue contrast for accurate anatomy delineation. MR-CT fusion enables more accurate localization of both anatomy and implanted radioactive seeds, and hence, improves the accuracy of postoperative dosimetry. We propose a method for automatic registration of MR and CT images without the need for manual initialization. Our registration method employs a point-to-volume registration scheme in which localized seeds in the CT images, produced by commercial treatment planning systems as part of the standard of care, are rigidly registered to preprocessed MR images. We tested our algorithm on ten patient data sets and achieved an overall registration error of 1.6 ± 0.8 mm with a running time of less than 20 s. With high registration accuracy, computational speed, and no need for manual intervention, our method has the potential to be employed in clinical applications.
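The core of a point-to-volume scheme like this can be illustrated in 2D: search over rigid transform parameters for the pose that maximizes the image intensity sampled at the transformed seed locations. The cost function, optimizer, and parameterization below are illustrative assumptions, not the clinical implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def register_points_to_volume(points, volume, x0=(0.0, 0.0, 0.0)):
    """Toy 2D rigid point-to-volume registration.

    points : (k, 2) seed coordinates (row, col) from one modality
    volume : 2D array in which the seeds should land on bright spots
    x0     : initial guess (theta, t_row, t_col)
    Returns the optimized (theta, t_row, t_col).
    """
    def cost(p):
        th, tr, tc = p
        c, s = np.cos(th), np.sin(th)
        rot = np.array([[c, -s], [s, c]])
        q = points @ rot.T + np.array([tr, tc])
        # sample the volume at the transformed points (linear interpolation)
        vals = map_coordinates(volume, q.T, order=1, mode='constant')
        return -vals.sum()

    return minimize(cost, x0, method='Nelder-Mead').x
```

Because the seeds themselves drive the cost, no manual initialization or intensity-based volume-to-volume similarity measure is needed, which is what makes this family of methods fast.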